Handle already-running containers and port conflicts #40
carole-lavillonniere wants to merge 4 commits into main
Conversation
@coderabbitai review
Force-pushed from e3ea894 to cb71ed1
📝 Walkthrough

Implements a staged container startup pipeline: select containers to start (skip running / check ports), pre-pull images, validate licenses per container, then start containers with readiness checks; also extends container config with Tag and ProductName and updates integration tests to use shared test scaffolding.

(Sequence diagram omitted: collapsed mermaid block.)

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (2 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/container/start.go`:
- Around lines 57-75: The token is currently retrieved before selectContainersToStart, which prevents an early return when no containers need starting; move the token retrieval to after selectContainersToStart and the len(containers)==0 check so lstk can fast-exit without failing on a missing token. Update the flow in start.go: call selectContainersToStart(...) first; if len(containers)==0, return nil; then fetch the token (the existing token retrieval logic); then ensure the filtered containers (the returned containers value) are injected/associated with the token before calling pullImages(...), validateLicenses(..., token), and startContainers(...). Adjust any code that passes cfg.Containers or containers to validateLicenses to use the token-aware container set.
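To make the suggested ordering concrete, here is a minimal sketch of the reordered body; the helper names (selectContainersToStart, pullImages, validateLicenses, startContainers, a.GetToken) come from this review, while the exact signatures and the runtimeContainers variable are assumptions:

```go
// Sketch only: signatures and variable names are assumed, not verbatim from the PR.
containers, cfgContainers, err := selectContainersToStart(ctx, rt, sink, runtimeContainers, cfg.Containers)
if err != nil {
	return err
}
if len(containers) == 0 {
	return nil // everything already running: fast-exit before touching the token
}
token, err := a.GetToken(ctx) // fetched only once we know there is work to do
if err != nil {
	return err
}
if err := pullImages(ctx, rt, sink, containers); err != nil {
	return err
}
if err := validateLicenses(ctx, sink, containers, token); err != nil {
	return err
}
return startContainers(ctx, rt, sink, containers, cfgContainers)
```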
Force-pushed from cb71ed1 to 73f3a09
silv-io left a comment
LGTM! Just some thoughts that popped up
internal/container/start.go (outdated)

```go
	return nil
}

func selectContainersToStart(ctx context.Context, rt runtime.Runtime, sink output.Sink, containers []runtime.ContainerConfig, cfgContainers []config.ContainerConfig) ([]runtime.ContainerConfig, []config.ContainerConfig, error) {
```
thought: that's a long line. Wild that gofmt doesn't split it up :) Maybe gofumpt would?
Yeah, gofmt does not care about line length and leaves it up to the coder to wrap. So if we want that, we'd need to introduce another tool.
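For what it's worth, gofmt does preserve a manually wrapped parameter list, so the signature could also be split by hand without introducing new tooling. A sketch (body elided):

```go
func selectContainersToStart(
	ctx context.Context,
	rt runtime.Runtime,
	sink output.Sink,
	containers []runtime.ContainerConfig,
	cfgContainers []config.ContainerConfig,
) ([]runtime.ContainerConfig, []config.ContainerConfig, error) {
	// ...
}
```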
internal/container/start.go (outdated)

```go
	var filtered []runtime.ContainerConfig
	var filteredCfg []config.ContainerConfig
```
thought: I wonder if we could somehow compose those together so that we don't have to handle them separately, like having runtime.ContainerConfig contain config.ContainerConfig, somewhat like this:
```go
type ContainerConfig struct {
	Image      string
	Name       string
	Port       string
	HealthPath string
	Env        []string // e.g., ["KEY=value", "FOO=bar"]
	*config.ContainerConfig
}
```
Not sure if it wouldn't make things annoying somewhere else though :)
That's a fair point, it looks quite complex at the moment.
That suggestion would require internal/runtime to import internal/config though, which could cause an import cycle.
What about this instead? 0227242
I added Tag and ProductName to runtime.ContainerConfig
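Presumably the resulting struct looks roughly like this; the pre-existing fields are copied from the suggestion above, and only Tag and ProductName are confirmed by 0227242, so treat the rest as an assumption:

```go
// Sketch of internal/runtime's ContainerConfig: the two extra fields live on
// the runtime-side struct itself, so internal/runtime never has to import
// internal/config and no import cycle can arise.
type ContainerConfig struct {
	Image       string
	Tag         string // added in 0227242
	Name        string
	ProductName string // added in 0227242
	Port        string
	HealthPath  string
	Env         []string // e.g., ["KEY=value", "FOO=bar"]
}
```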
internal/container/start.go (outdated)

```go
func checkPortAvailable(port string) error {
	conn, err := net.DialTimeout("tcp", "localhost:"+port, time.Second)
	if err != nil {
		return nil
	}
	err = conn.Close()
	if err != nil {
		return nil
	}
	return fmt.Errorf("port %s already in use", port)
}
```
thought: I wonder if it makes sense to have that somewhere in a util/x, but we might only need it here.
Makes sense, moved it to internal/ports f4f420e
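A minimal sketch of the relocated helper, assuming the file path and exported name shown here (the actual identifiers in f4f420e may differ) and already folding in the Close-error fix raised later in this review:

```go
// internal/ports/ports.go (file and function names are assumptions)
package ports

import (
	"fmt"
	"net"
	"time"
)

// CheckAvailable probes localhost:<port>; a failed dial means nothing is
// listening, so the port is considered free.
func CheckAvailable(port string) error {
	conn, err := net.DialTimeout("tcp", "localhost:"+port, time.Second)
	if err != nil {
		return nil // nothing accepted the connection: port is available
	}
	_ = conn.Close() // a Close error is irrelevant to the availability result
	return fmt.Errorf("port %s already in use", port)
}
```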
Actionable comments posted: 2
🧹 Nitpick comments (1)

test/integration/start_test.go (1)

106-124: Same environment variable consistency issue; consider binding to localhost explicitly. Same suggestion as above regarding env.With() usage. The static analysis warning about binding to all interfaces is a false positive in this test context, since you intentionally need to occupy the port. However, binding to 127.0.0.1:4566 instead of :4566 would be slightly more precise, since checkPortAvailable in start.go dials localhost.

♻️ Optional: more precise binding

```diff
- ln, err := net.Listen("tcp", ":4566")
+ ln, err := net.Listen("tcp", "127.0.0.1:4566")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/start_test.go` around lines 106-124: in TestStartCommandFailsWhenPortInUse, update the port binding to be explicit to localhost by changing the net.Listen call from ":4566" to "127.0.0.1:4566" so it matches how checkPortAvailable dials localhost; also make the environment setting consistent by replacing the manual append to cmd.Env (cmd.Env = append(os.Environ(), "LOCALSTACK_AUTH_TOKEN=fake-token")) with the project's env helper (e.g., env.With or equivalent used elsewhere) to keep env construction consistent across tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/container/start.go`:
- Around lines 157-167: In checkPortAvailable, when net.DialTimeout succeeds it indicates the port is in use, so always return the "port %s already in use" error regardless of the conn.Close() result; do not return nil on a Close error. Update the function to defer or call conn.Close(), ignore or log any Close error, and then return fmt.Errorf("port %s already in use", port) after a successful DialTimeout, ensuring that a successful dial drives the error path.
In `@test/integration/start_test.go`:
- Around lines 88-104: The test TestStartCommandDoesNothingWhenAlreadyRunning uses append(os.Environ(), ...) and the wrong env var name; replace that with the project's env helper so the CLI reads the correct token: use env.With(env.AuthToken, "fake-token") for the cmd.Env setup (the same approach used in other tests). That is, update the exec.CommandContext invocation to set cmd.Env via env.With and reference env.AuthToken instead of "LOCALSTACK_AUTH_TOKEN" so startTestContainer and the CLI see the correct LSTK_AUTH_TOKEN value.
---
Duplicate comments:
In `@internal/container/start.go`:
- Around lines 27-30: The token is still fetched before selecting containers; change the flow in start.go so selectContainersToStart(...) is called first and its result examined, and only if there are containers to start call a.GetToken(ctx) and proceed. Specifically, move the GetToken call to after selectContainersToStart and guard token retrieval, and any subsequent operations on the token, by checking that the returned list from selectContainersToStart is non-empty (use the selectContainersToStart return value and the GetToken method names to locate where to reorder).
---
Nitpick comments:
In `@test/integration/start_test.go`:
- Around lines 106-124: In TestStartCommandFailsWhenPortInUse, update the port binding to be explicit to localhost by changing the net.Listen call from ":4566" to "127.0.0.1:4566" so it matches how checkPortAvailable dials localhost; also make the environment setting consistent by replacing the manual append to cmd.Env (cmd.Env = append(os.Environ(), "LOCALSTACK_AUTH_TOKEN=fake-token")) with the project's env helper (e.g., env.With or equivalent used elsewhere) to keep env construction consistent across tests.
internal/container/start.go (outdated)

```go
func checkPortAvailable(port string) error {
	conn, err := net.DialTimeout("tcp", "localhost:"+port, time.Second)
	if err != nil {
		return nil
	}
	err = conn.Close()
	if err != nil {
		return nil
	}
	return fmt.Errorf("port %s already in use", port)
}
```
Bug: returns nil when conn.Close() fails, masking the port-in-use condition.
If DialTimeout succeeds (meaning the port is in use) but conn.Close() subsequently fails, the function incorrectly returns nil instead of the "port already in use" error. The close error is irrelevant to the port availability check.
🐛 Proposed fix

```diff
 func checkPortAvailable(port string) error {
 	conn, err := net.DialTimeout("tcp", "localhost:"+port, time.Second)
 	if err != nil {
 		return nil
 	}
-	err = conn.Close()
-	if err != nil {
-		return nil
-	}
+	_ = conn.Close()
 	return fmt.Errorf("port %s already in use", port)
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
func checkPortAvailable(port string) error {
	conn, err := net.DialTimeout("tcp", "localhost:"+port, time.Second)
	if err != nil {
		return nil
	}
	_ = conn.Close()
	return fmt.Errorf("port %s already in use", port)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@internal/container/start.go` around lines 157-167: in checkPortAvailable, when net.DialTimeout succeeds it indicates the port is in use, so always return the "port %s already in use" error regardless of the conn.Close() result; do not return nil on a Close error. Update the function to defer or call conn.Close(), ignore or log any Close error, and then return fmt.Errorf("port %s already in use", port) after a successful DialTimeout, ensuring that a successful dial drives the error path.
test/integration/start_test.go

```go
func TestStartCommandDoesNothingWhenAlreadyRunning(t *testing.T) {
	requireDocker(t)
	cleanup()
	t.Cleanup(cleanup)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	startTestContainer(t, ctx)

	cmd := exec.CommandContext(ctx, binaryPath(), "start")
	cmd.Env = append(os.Environ(), "LOCALSTACK_AUTH_TOKEN=fake-token")
	output, err := cmd.CombinedOutput()

	require.NoError(t, err, "lstk start should succeed when container is already running: %s", output)
	assert.Contains(t, string(output), "already running")
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Search for env helper and AuthToken definition
fd -t f "env" test/integration/ | head -20
```

Repository: localstack/lstk
Length of output: 86

🏁 Script executed:

```bash
#!/bin/bash
# Look for the env package structure
find test/integration -type f -name "*.go" | xargs grep -l "env\.With\|env\.AuthToken" | head -10
```

Repository: localstack/lstk
Length of output: 153

🏁 Script executed:

```bash
#!/bin/bash
# Search for AuthToken definition in the env helper
rg -n "AuthToken\s*=" test/integration/ --type go
```

Repository: localstack/lstk
Length of output: 41

🏁 Script executed:

```bash
#!/bin/bash
# Check if the start command uses Bubble Tea
rg -n "Bubble Tea|bubbletea" --type go | head -20
```

Repository: localstack/lstk
Length of output: 512

🏁 Script executed:

```bash
#!/bin/bash
# Check other tests in start_test.go to see env.With usage
head -100 test/integration/start_test.go | rg -n "env\.With\|os\.Environ"
```

Repository: localstack/lstk
Length of output: 41

🏁 Script executed:

```bash
#!/bin/bash
# Read the env helper file to see what AuthToken maps to
cat test/integration/env/env.go
```

Repository: localstack/lstk
Length of output: 1110
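The file contents are collapsed above (only "Length of output: 1110" survives), but from how the helper is used in this review, with env.With(env.AuthToken, "fake-token") producing a complete cmd.Env slice, it plausibly looks something like this hypothetical reconstruction:

```go
// test/integration/env/env.go (hypothetical reconstruction, not the actual file)
package env

import "os"

// AuthToken is the variable the lstk CLI reads its auth token from.
const AuthToken = "LSTK_AUTH_TOKEN"

// With returns the current process environment plus key=value appended.
func With(key, value string) []string {
	return append(os.Environ(), key+"="+value)
}
```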
🏁 Script executed:

```bash
#!/bin/bash
# Check the start_test.go file to see env.With and os.Environ patterns
cat test/integration/start_test.go
```

Repository: localstack/lstk
Length of output: 4026

🏁 Script executed:

```bash
#!/bin/bash
# Search for where LOCALSTACK_AUTH_TOKEN or similar env vars are used
rg -n "LOCALSTACK_AUTH_TOKEN|LSTK_AUTH_TOKEN" --type go
```

Repository: localstack/lstk
Length of output: 937

🏁 Script executed:

```bash
#!/bin/bash
# Check if start command has Bubble Tea UI
rg -n "tea\.|bubbletea" cmd/ internal/ --type go | grep -i start
```

Repository: localstack/lstk
Length of output: 41
Use env.With() for environment variable consistency and the correct env var name.
This test (and the one at line 106) uses append(os.Environ(), "LOCALSTACK_AUTH_TOKEN=fake-token"), which is inconsistent with earlier tests in the file that use env.With(). More importantly, LOCALSTACK_AUTH_TOKEN is incorrect: that's the variable the CLI passes to the container. The CLI reads from LSTK_AUTH_TOKEN (via env.AuthToken). Use env.With(env.AuthToken, "fake-token") instead.
♻️ Suggested fix

```diff
- cmd.Env = append(os.Environ(), "LOCALSTACK_AUTH_TOKEN=fake-token")
+ cmd.Env = env.With(env.AuthToken, "fake-token")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
func TestStartCommandDoesNothingWhenAlreadyRunning(t *testing.T) {
	requireDocker(t)
	cleanup()
	t.Cleanup(cleanup)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	startTestContainer(t, ctx)

	cmd := exec.CommandContext(ctx, binaryPath(), "start")
	cmd.Env = env.With(env.AuthToken, "fake-token")
	output, err := cmd.CombinedOutput()

	require.NoError(t, err, "lstk start should succeed when container is already running: %s", output)
	assert.Contains(t, string(output), "already running")
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/start_test.go` around lines 88-104: the test TestStartCommandDoesNothingWhenAlreadyRunning uses append(os.Environ(), ...) and the wrong env var name; replace that with the project's env helper so the CLI reads the correct token: use env.With(env.AuthToken, "fake-token") for the cmd.Env setup (the same approach used in other tests). That is, update the exec.CommandContext invocation to set cmd.Env via env.With and reference env.AuthToken instead of "LOCALSTACK_AUTH_TOKEN" so startTestContainer and the CLI see the correct LSTK_AUTH_TOKEN value.
Before pulling images or validating licenses, lstk start now checks the state of each configured container:
- If a container is already running, it reports "<name> is already running" and skips it. If all containers are already running, the command exits successfully without doing any work.
- If a container's configured port is occupied by something else, it fails with "port <N> already in use" and a hint pointing to the config file.
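For illustration, a hypothetical session: the container name is a placeholder, port 4566 is taken from the integration tests above, and only the quoted message templates come from the PR:

```console
# All configured containers already up: reports it and exits 0,
# without pulling images or validating licenses.
$ lstk start
localstack is already running

# Another process occupies a configured port: fails fast, with a hint
# pointing to the config file (exact hint wording not shown in the PR).
$ lstk start
port 4566 already in use
```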